Translation: from Russian to English

from English to Russian

subspace iteration method

See also in other dictionaries:

  • Power iteration — In mathematics, the power iteration is an eigenvalue algorithm: given a matrix A, the algorithm will produce a number λ (the eigenvalue) and a nonzero vector v (the eigenvector), such that Av = λv. The power iteration is a very… …   Wikipedia
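
A minimal pure-Python sketch of the power iteration described above; the 2×2 test matrix and the fixed iteration count are illustrative assumptions, not part of the entry:

```python
def power_iteration(A, num_iters=100):
    """Estimate the dominant eigenvalue/eigenvector of A (a list of rows)."""
    n = len(A)
    v = [1.0] * n  # arbitrary nonzero starting vector (illustrative choice)
    for _ in range(num_iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]  # renormalize each step to avoid overflow
    # Rayleigh quotient v^T A v gives the eigenvalue estimate
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n))
    return lam, v

# Example: the eigenvalues of [[2, 1], [1, 2]] are 3 and 1,
# so the iteration should converge to lambda = 3.
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```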

  • Generalized minimal residual method — In mathematics, the generalized minimal residual method (usually abbreviated GMRES) is an iterative method for the numerical solution of a system of linear equations. The method approximates the solution by the vector in a Krylov subspace with… …   Wikipedia

  • Arnoldi iteration — In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of iterative methods. Arnoldi finds the eigenvalues of general (possibly non-Hermitian) matrices; an analogous method for Hermitian matrices is… …   Wikipedia
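
A compact sketch of the Arnoldi iteration in pure Python, building an orthonormal Krylov basis and the associated upper Hessenberg matrix; the 2×2 example inputs are illustrative assumptions:

```python
def arnoldi(A, b, k):
    """Build an orthonormal basis Q of the Krylov subspace K_k(A, b) and the
    (k+1) x k upper Hessenberg matrix H satisfying A Q_k = Q_{k+1} H."""
    n = len(b)
    norm_b = sum(x * x for x in b) ** 0.5
    Q = [[x / norm_b for x in b]]            # q_0 = b / ||b||
    H = [[0.0] * k for _ in range(k + 1)]
    for j in range(k):
        # w = A q_j
        w = [sum(A[i][m] * Q[j][m] for m in range(n)) for i in range(n)]
        for i in range(j + 1):               # orthogonalize against q_0..q_j
            H[i][j] = sum(Q[i][m] * w[m] for m in range(n))
            w = [w[m] - H[i][j] * Q[i][m] for m in range(n)]
        H[j + 1][j] = sum(x * x for x in w) ** 0.5
        if H[j + 1][j] < 1e-12:              # breakdown: subspace is invariant
            break
        Q.append([x / H[j + 1][j] for x in w])
    return Q, H

# Example run on a small symmetric matrix (H then comes out tridiagonal).
Q, H = arnoldi([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0], 2)
```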

  • Derivation of the conjugate gradient method — In numerical linear algebra, the conjugate gradient method is an iterative method for numerically solving the linear system Ax = b, where A is symmetric positive definite. The conjugate gradient method can be derived from several different perspectives,… …   Wikipedia
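
A short pure-Python sketch of the conjugate gradient iteration for Ax = b with A symmetric positive definite; the 2×2 system used in the example is an illustrative assumption:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iters=1000):
    """Solve A x = b for symmetric positive definite A (lists of rows)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual r = b - A x (x starts at 0)
    p = r[:]                 # first search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:              # converged
            break
        # next direction is conjugate to the previous ones
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Example: solve [[4, 1], [1, 3]] x = [1, 2]; exact solution is (1/11, 7/11).
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```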

  • Krylov subspace — In linear algebra, the Krylov subspace generated by an n-by-n matrix A and an n-vector b is the subspace K_n spanned by the vectors of the Krylov sequence: K_n = span{ b, Ab, A^2 b, …, A^{n−1} b }. It… …   Wikipedia
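
The Krylov sequence from the entry can be generated directly; a minimal sketch (the diagonal test matrix is an illustrative assumption):

```python
def krylov_sequence(A, b, k):
    """Return the vectors b, Ab, A^2 b, ..., A^{k-1} b (not orthogonalized;
    in practice these become nearly parallel, which is why Arnoldi or
    Gram-Schmidt orthogonalization is used)."""
    n = len(b)
    vecs = [b[:]]
    for _ in range(k - 1):
        v = vecs[-1]
        vecs.append([sum(A[i][j] * v[j] for j in range(n)) for i in range(n)])
    return vecs

# Example: for diagonal A = diag(2, 3) and b = (1, 1), the sequence is
# (1, 1), (2, 3), (4, 9).
vecs = krylov_sequence([[2.0, 0.0], [0.0, 3.0]], [1.0, 1.0], 3)
```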

  • Conjugate residual method — The conjugate residual method is an iterative numeric method used for solving systems of linear equations. It is a Krylov subspace method very similar to the much more popular conjugate gradient method, with similar construction and convergence… …   Wikipedia

  • Biconjugate gradient method — In mathematics, more specifically in numerical analysis, the biconjugate gradient method is an algorithm to solve systems of linear equations Ax = b. Unlike the conjugate gradient method, this algorithm does not require the matrix A to be self… …   Wikipedia

  • Henk van der Vorst — Hendrik Albertus (Henk) van der Vorst (born on May 5, 1944) is a Dutch mathematician, Emeritus Professor of Numerical Analysis at Utrecht University. According to the Institute for Scientific Information (ISI), his paper… …   Wikipedia

  • Principal component analysis — PCA of a multivariate Gaussian distribution centered at (1,3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction. The vectors shown are the eigenvectors of the covariance matrix scaled by… …   Wikipedia

  • List of numerical analysis topics — This is a list of numerical analysis topics, by Wikipedia page. …   Wikipedia

  • Gram–Schmidt process — In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process is a method for orthogonalizing a set of vectors in an inner product space, most commonly the Euclidean space R^n. The Gram–Schmidt process takes a… …   Wikipedia
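
A minimal pure-Python sketch of the classical Gram–Schmidt process in R^n; the two input vectors in the example are illustrative assumptions:

```python
def gram_schmidt(vectors):
    """Return an orthonormal basis for the span of the given vectors
    (classical Gram-Schmidt: each vector is projected onto the basis
    built so far, the projection is subtracted, and the remainder is
    normalized)."""
    basis = []
    for v in vectors:
        w = v[:]
        for q in basis:
            proj = sum(qi * vi for qi, vi in zip(q, v))   # <q, v>
            w = [wi - proj * qi for wi, qi in zip(w, q)]  # remove component
        norm = sum(x * x for x in w) ** 0.5
        if norm > 1e-12:   # skip (near-)linearly-dependent vectors
            basis.append([x / norm for x in w])
    return basis

# Example: orthonormalize two vectors in R^2.
B = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
```

In floating point, the modified Gram–Schmidt variant (projecting the running remainder w instead of the original v) is usually preferred for numerical stability.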
